
    Optimizing automated preprocessing streams for brain morphometric comparisons across multiple primate species

    INTRODUCTION

MR techniques have delivered images of brains from a wide array of species, ranging from invertebrates to birds to elephants and whales. However, their potential to serve as a basis for comparative brain morphometric investigations has rarely been tapped so far (Christidis & Cox, 2006; Van Essen & Dierker, 2007), which also hampers a deeper understanding of the mechanisms behind structural alterations in neurodevelopmental disorders (Kochunov et al., 2010). One of the reasons for this is the lack of computational tools suitable for morphometric comparisons across multiple species. In this work, we aim to characterize this gap, taking primates as an example.

METHODS

Using a legacy dataset comprising MR scans from eleven species of haplorhine primates acquired on the same scanner (Rilling & Insel, 1998), we tested different automated processing streams, focusing on denoising and brain segmentation. Newer multi-species datasets are not currently available, so our experiments with this decade-old dataset (which had a very low signal-to-noise ratio by contemporary standards) can serve to highlight the lower boundary of the current possibilities of automated processing pipelines. After manual orientation into Talairach space, an automated bias correction was performed using CARET (Van Essen et al., 2001) before the brains were extracted with FSL BET (Smith, 2002; Fig. 1) and then either smoothed with an isotropic Gaussian kernel, FSL SUSAN (Smith, 1996), an anisotropic diffusion filter (Perona & Malik, 1990), or an optimized Rician non-local means filter (Gaser & Coupé, 2010), or left unsmoothed (Fig. 2 & 3). Segmentation of the brains (Fig. 2 & 4) was performed either with FSL FAST (Zhang et al., 2001) without atlas priors or with an Adaptive Maximum A Posteriori approach (Rajapakse et al., 1997). Finally, the white matter surface was extracted with CARET and inspected for anatomical and topological correctness.
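
To make the individual processing steps concrete, the following Python sketch strings together one of the tested processing paths (brain extraction with FSL BET, isotropic Gaussian smoothing, and FSL FAST segmentation without atlas priors) by calling the FSL command-line tools; file names and parameter values are illustrative placeholders, not the exact settings used in this study.

    import subprocess

    def run(cmd):
        """Run one FSL command-line tool and stop on errors."""
        print(" ".join(cmd))
        subprocess.run(cmd, check=True)

    # Placeholder file names; the input is assumed to be a bias-corrected,
    # Talairach-oriented T1 image of a single specimen.
    t1 = "specimen_T1.nii.gz"
    brain = "specimen_T1_brain.nii.gz"
    smoothed = "specimen_T1_brain_smooth.nii.gz"

    # 1. Skull stripping with FSL BET; the fractional intensity threshold (-f)
    #    is one of the parameters that may need species-specific tuning.
    run(["bet", t1, brain, "-f", "0.5"])

    # 2. Isotropic Gaussian smoothing with fslmaths (sigma in mm).
    run(["fslmaths", brain, "-s", "0.5", smoothed])

    # 3. Segmentation of the T1 image into three tissue classes with FSL FAST,
    #    without atlas priors.
    run(["fast", "-t", "1", "-n", "3", smoothed])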

RESULTS

Figure 3 shows that noise reduction was generally necessary but that, at least for these noisy data, anisotropic filtering (SUSAN, diffusion filter, Rician filter) provided little improvement over simple isotropic filtering. While several segmentations worked well in individual species, our focus was on cross-species optimization of the processing pipeline, and none of the tested segmentations performed uniformly well in all 11 species. Performance could be improved by some of the denoising approaches and by deviating systematically from the default parameters recommended for processing human brains (cf. Fig. 4). Depending on the size of the brain and on the processing path, generating the white matter surface from the T1 image took between about two minutes (squirrel monkeys) and half an hour (humans) on a dual-core 2.4 GHz iMac. Nonetheless, the resulting surfaces always necessitated topology correction and, often, considerable manual cleanup.
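
As an illustration of what such a systematic deviation from human defaults can look like in practice, the sketch below runs a hypothetical per-species sweep over BET's fractional intensity threshold; the species list and candidate values are assumptions chosen for the example, not the parameters evaluated here.

    import subprocess

    # Hypothetical candidate values for BET's fractional intensity threshold
    # (-f); 0.5 is the default commonly recommended for human brains.
    candidates = {
        "human": [0.5],
        "chimpanzee": [0.4, 0.5],
        "macaque": [0.3, 0.4, 0.5],
        "squirrel_monkey": [0.2, 0.3, 0.4],
    }

    for species, thresholds in candidates.items():
        t1 = f"{species}_T1.nii.gz"  # placeholder input file per species
        for frac in thresholds:
            out = f"{species}_brain_f{frac}.nii.gz"
            # The sweep only generates candidate brain extractions; choosing
            # among them still requires visual inspection.
            subprocess.run(["bet", t1, out, "-f", str(frac)], check=True)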


CONCLUSIONS

Automated processing pipelines for surface-based morphometry still require considerable adaptations to reach optimal performance across brains of multiple species, even within primates (cf. Fig. 5). However, most contemporary datasets have a better signal-to-noise ratio than the one used here, which allows for better segmentations and cortical surface reconstructions. Considering further that cross-scanner variability is well below within-species differences (Stonnington et al., 2008), the prospects look good for comparative evolutionary analyses of cortical parameters, and of gyrification in particular. To succeed, however, computational efforts in comparative morphometry depend on high-quality imaging data from multiple species becoming more widely available.

ACKNOWLEDGMENTS

D.M., R.D., and C.G. are supported by the German BMBF grant 01EV0709.


REFERENCES

Christidis, P & Cox, RW (2006), A Step-by-Step Guide to Cortical Surface Modeling of the Nonhuman Primate Brain Using FreeSurfer, Proc Human Brain Mapping Annual Meeting, http://afni.nimh.nih.gov/sscc/posters/file.2006-06-01.4536526043 .
Gaser, C & Coupé, P (2010), Impact of Non-local Means filtering on Brain Tissue Segmentation, OHBM 2010, Abstract 1770.
Kochunov, P et al. (2010), Mapping primary gyrogenesis during fetal development in primate brains: high-resolution in utero structural MRI study of fetal brain development in pregnant baboons, Frontiers in Neuroscience, in press, DOI: 10.3389/fnins.2010.00020.
Perona, P & Malik, J (1990), Scale-space and edge detection using anisotropic diffusion, IEEE Trans Pattern Anal Machine Intell, vol. 12, no. 7, pp. 629-639.
Rajapakse, JC et al. (1997), Statistical approach to segmentation of single-channel cerebral MR images, IEEE Trans Med Imaging, vol. 16, no. 2, pp. 176-186.
Rilling, JK & Insel, TR (1998), Evolution of the cerebellum in primates: differences in relative volume among monkeys, apes and humans, Brain Behav Evol, vol. 52, pp. 308-314, DOI: 10.1159/000006575. Dataset available at http://www.fmridc.org/f/fmridc/77.html .
Smith, SM (1996), Flexible filter neighbourhood designation, Proc 13th Int Conf on Pattern Recognition, vol. 1, pp. 206-212.
Smith, SM (2002), Fast robust automated brain extraction, Hum Brain Mapp, vol. 17, no. 3, pp. 143-155.
Stonnington, CM et al. (2008), Interpreting scan data acquired from multiple scanners: a study with Alzheimer's disease, Neuroimage, vol. 39, no. 3, pp. 1180-1185.
Van Essen, DC et al. (2001), An Integrated Software System for Surface-based Analyses of Cerebral Cortex, J Am Med Inform Assoc, vol. 8, no. 5, pp. 443-459.
Van Essen, DC & Dierker, DL (2007), Surface-based and probabilistic atlases of primate cerebral cortex, Neuron, vol. 56, no. 2, pp. 209-225.
Zhang, Y et al. (2001), Segmentation of brain MR images through a hidden Markov random field model and the expectation maximization algorithm, IEEE Trans Med Imaging, vol. 20, no. 1, pp. 45-57.

    Mathematics and Wikidata

    A contribution to the DMV 2023 conference, presented on 25 September 2023 in Ilmenau. The original sits at https://m.wikidata.org/wiki/Wikidata:WikiProject_Mathematics/Talks/DMV2023/Mathematics_and_Wikidata . Abstract: Wikidata is an open and collaborative database that anyone can edit, which thousands do on a regular basis. Launched a decade ago, FAIR (Findable, Accessible, Interoperable and Reusable) right from the start, and closely integrated with its sister sites in the Wikipedia ecosystem, it has since become the edit button of the semantic web and is increasingly being integrated with scholarly databases and workflows spanning all fields of research. This presentation will consider Wikidata through the lens of mathematics, covering content, infrastructure and community aspects and how each of these is curated and interlinked both within and beyond Wikidata. In terms of content, coverage of mathematical concepts will be explored, including objects of mathematical research, software and other methods used in mathematical research, mathematical aspects of research in other fields, as well as mathematical literature and linguistic knowledge about mathematical terminology across natural languages. In terms of infrastructure, support mechanisms for describing, displaying and analyzing mathematical objects in Wikidata contexts will be discussed. In terms of communities, we will cover producers, curators and users of mathematical knowledge and data, along with community structures engaged in any aspect of the life cycle of mathematical entities, both in scholarly and Wikidata contexts.
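
    As a small illustration of the infrastructure side, the Python sketch below queries the public Wikidata Query Service for items carrying a defining formula. The use of property P2534 ("defining formula") and the result limit are assumptions made for the example; the query can be adapted to any other mathematical facet of Wikidata.

        import requests

        ENDPOINT = "https://query.wikidata.org/sparql"

        # Assumption: P2534 is Wikidata's 'defining formula' property; swap in
        # whichever property or class fits the mathematical content of interest.
        QUERY = """
        SELECT ?item ?itemLabel ?formula WHERE {
          ?item wdt:P2534 ?formula .
          SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
        }
        LIMIT 10
        """

        response = requests.get(
            ENDPOINT,
            params={"query": QUERY, "format": "json"},
            headers={"User-Agent": "mathematics-and-wikidata-example/0.1"},
        )
        response.raise_for_status()

        # Print each item's English label together with its formula markup.
        for row in response.json()["results"]["bindings"]:
            print(row["itemLabel"]["value"], ":", row["formula"]["value"])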

    Wikis as platforms for scholarly publishing


    Computational reproducibility of Jupyter notebooks from biomedical publications

    Jupyter notebooks facilitate the bundling of executable code with its documentation and output in one interactive environment, and they represent a popular mechanism to document and share computational workflows. The reproducibility of computational aspects of research is a key component of scientific reproducibility but has not yet been assessed at scale for Jupyter notebooks associated with biomedical publications. We address computational reproducibility at two levels: First, using fully automated workflows, we analyzed the computational reproducibility of Jupyter notebooks related to publications indexed in PubMed Central. We identified such notebooks by mining the articles' full text, locating them on GitHub and re-running them in an environment as close to the original as possible. We documented reproduction success and exceptions and explored relationships between notebook reproducibility and variables related to the notebooks or publications. Second, this study represents a reproducibility attempt in and of itself, using essentially the same methodology twice on PubMed Central over two years. Out of 27,271 notebooks from 2,660 GitHub repositories associated with 3,467 articles, 22,578 notebooks were written in Python, including 15,817 that had their dependencies declared in standard requirement files and that we attempted to re-run automatically. For 10,388 of these, all declared dependencies could be installed successfully, and we re-ran them to assess reproducibility. Of these, 1,203 notebooks ran through without any errors, including 879 that produced results identical to those reported in the original notebook and 324 for which our results differed from the originally reported ones. Running the other notebooks resulted in exceptions. We zoom in on common problems, highlight trends and discuss potential improvements to Jupyter-related workflows associated with biomedical publications.
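
    The core re-execution step can be pictured with the following Python sketch, which installs the dependencies declared in a repository's requirements file and re-runs a single notebook via nbconvert. The paths are placeholders, and the sketch omits the environment isolation and output comparison performed in the actual study.

        import subprocess
        from pathlib import Path

        def rerun_notebook(repo_dir, notebook, timeout_s=600):
            """Install declared dependencies and re-execute one notebook.

            Returns True if the notebook ran through without errors; comparing
            the regenerated outputs to the original ones is a separate step.
            """
            repo = Path(repo_dir)
            requirements = repo / "requirements.txt"  # assumed requirements file

            # 1. Install declared dependencies (ideally inside a fresh virtual
            #    environment or container to stay close to the original setup).
            if requirements.exists():
                subprocess.run(
                    ["pip", "install", "-r", str(requirements)], check=True
                )

            # 2. Re-execute the notebook; a failing cell makes nbconvert exit
            #    with a non-zero status, which we would record as an exception.
            result = subprocess.run([
                "jupyter", "nbconvert", "--to", "notebook", "--execute",
                "--output", "rerun_" + Path(notebook).stem,
                "--ExecutePreprocessor.timeout=" + str(timeout_s),
                str(repo / notebook),
            ])
            return result.returncode == 0

        # Example call with placeholder paths:
        # ok = rerun_notebook("some_repository", "analysis.ipynb")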

    Computational Morphometry for Detecting Changes in Brain Structure Due to Development, Aging, Learning, Disease and Evolution

    The brain, like any living tissue, is constantly changing in response to genetic and environmental cues and their interaction, leading to changes in brain function and structure, many of which are now within reach of neuroimaging techniques. Computational morphometry on the basis of Magnetic Resonance (MR) images has become the method of choice for studying macroscopic changes of brain structure across time scales. Thanks to computational advances and sophisticated study designs, both the minimal extent of change necessary for detection and, consequently, the minimal periods over which such changes can be detected have been reduced considerably during the last few years. On the other hand, the growing availability of MR images of more and more diverse brain populations also allows more detailed inferences about brain changes that occur over larger time scales, well beyond the duration of an average research project. On this basis, a whole range of issues concerning the structures and functions of the brain are now becoming addressable, thereby providing ample challenges and opportunities for further contributions from neuroinformatics to our understanding of the brain and how it changes over a lifetime and in the course of evolution.

    Connecting research-related FAIR Digital Objects with communities of stakeholders

    The last few years have seen considerable progress in terms of integrating individual elements of the research ecosystem with the so-called FAIR Principles (Wilkinson et al. 2016), a set of guidelines to make research-related resources more findable, accessible, interoperable and reusable (FAIR). This integration process has many technical as well as social components and ramifications, some of which have resulted in dedicated terms like that of a FAIR Digital Object (FDO), which stands for a research object (e.g. a dataset, software, specimen or publication) having at least a minimum level of compliance with the FAIR Principles. As the volume, breadth and depth of FAIR data and the variety of FAIR Digital Objects as well as their use and reuse continue to grow, there is ample opportunity for multi-dimensional interactions between generators, managers, curators, users and reusers of data, and the scope of data quality issues is diversifying accordingly. This poster looks at two ways in which individual collections of FAIR Digital Objects interact with the wider FAIR research landscape. First, it considers communities that curate, generate or use data, metadata or other resources pertaining to individual collections of FAIR Digital Objects. Specifically, which of these community activities are affected by higher or lower compliance of a collection's FDOs with the FAIR Principles? Second, we will consider the case of communities that overlap across FAIR collections, i.e. when some community members are engaged with several collections, possibly through multiple platforms, and what this means in terms of challenges and opportunities for enhancing findability, accessibility, interoperability and reusability between and across FAIR silos.

    Scholia and scientometrics with Wikidata

    Scholia is a tool to handle scientific bibliographic information in Wikidata. The Scholia Web service creates on-the-fly scholarly profiles for researchers, organizations, journals, publishers, individual scholarly works, and for research topics. To collect the data, it queries the SPARQL-based Wikidata Query Service. Among several display formats available in Scholia are lists of publications for individual researchers and organizations, publications per year, employment timelines, as well as co-author networks and citation graphs. The Python package implementing the Web service is also able to format Wikidata bibliographic entries for use in LaTeX/BibTeX.
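
    This is not Scholia's own code, but the kind of query behind such a profile can be sketched in a few lines of Python against the Wikidata Query Service. The author item wd:Q00000000 is a placeholder to be replaced with a real researcher's Wikidata ID.

        import requests

        ENDPOINT = "https://query.wikidata.org/sparql"

        # Placeholder author item; P50 is Wikidata's 'author' property and
        # P577 its 'publication date' property.
        QUERY = """
        SELECT ?work ?workLabel ?date WHERE {
          ?work wdt:P50 wd:Q00000000 .
          OPTIONAL { ?work wdt:P577 ?date . }
          SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
        }
        ORDER BY DESC(?date)
        LIMIT 20
        """

        rows = requests.get(
            ENDPOINT,
            params={"query": QUERY, "format": "json"},
            headers={"User-Agent": "scholia-style-example/0.1"},
        ).json()["results"]["bindings"]

        # Print a simple publication list: date (if any) and work title/label.
        for row in rows:
            print(row.get("date", {}).get("value", "????"),
                  row["workLabel"]["value"])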

    Cortex reorganization of Xenopus laevis eggs in strong static magnetic fields

    Observations of magnetic field effects on biological systems have often been contradictory. For amphibian eggs, a review of the available literature suggests that part of the discrepancies might be resolved by considering a previously neglected parameter for morphological alterations induced by magnetic fields: the jelly layers that normally surround the egg and are often removed in laboratory studies for easier cell handling. To experimentally test this hypothesis, we observed the morphology of fertilizable Xenopus laevis eggs with and without jelly coat that were subjected to static magnetic fields of up to 9.4 T for different periods of time. A complex reorganization of cortical pigmentation was found in dejellied eggs as a function of magnetic field strength and exposure time. Initial pigment rearrangements could be observed at about 0.5 T, and less than 3 T is required for the effects to fully develop within two hours. No effect was observed when the jelly layers of the eggs were left intact. These results suggest that the action of magnetic fields might involve cortical pigments or associated cytoskeletal structures normally held in place by the jelly layers, and that the presence of the jelly layer should indeed be included in further studies of magnetic field effects in this system.

    Collaborative platforms for streamlining workflows in Open Science

    Despite the internet’s dynamic and collaborative nature, scientists continue to produce grant proposals, lab notebooks, data files, conclusions etc. that stay in static formats or are not published online and therefore not always easily accessible to the interested public. Because of limited adoption of tools that seamlessly integrate all aspects of a research project (conception, data generation, data evaluation, peer-reviewing and publishing of conclusions), much effort is later spent on reproducing or reformatting individual entities before they can be repurposed independently or as parts of articles.

We propose that workflows, performed both individually and collaboratively, could become more efficient if all steps of the research cycle were coherently represented online and the underlying data were formatted, annotated and licensed for reuse. Such a system would accelerate the process of taking projects from conception to publication and would allow for continuous updating of the data sets and their interpretation, as well as their integration into other independent projects.

A major advantage of such workflows is increased transparency, both with respect to the scientific process and to the contribution of each participant. The latter point is important from a motivational perspective, as it enables the allocation of reputation, which creates incentives for scientists to contribute to projects. Such workflow platforms, offering possibilities to fine-tune the accessibility of their content, could gradually pave the way from the current static mode of research presentation to a more coherent practice of open science.